129 research outputs found
Sweet sorghum (Sorghum bicolor) biomass, generated from biofuel production, as a reservoir of bioactive compounds for human health
2014 Spring. Includes bibliographical references. To view the abstract, please see the full text of the document.
Semi-Automated Analysis of Large Privacy Policy Corpora
Regulators, policy makers, and consumers are interested in proactively identifying services with acceptable or compliant data use policies, privacy policies, and terms of service. Academic requirements engineering researchers and legal scholars have developed qualitative, manual approaches to conducting requirements analysis of policy documents to identify concerns and compare services against preferences or standards. In this research, we develop and present an approach to conducting large-scale, qualitative, prospective analyses of policy documents with respect to the wide variety of normative concerns found in policy documents. Our approach uses techniques from natural language processing, including topic modeling and summarization. We evaluate our approach in an exploratory case study that replicates a manual legal analysis of roughly 200 privacy policies from seven domains in a semi-automated fashion at a larger scale. Our findings suggest that this approach is promising for some concerns.
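The abstract does not specify the pipeline's internals; as a rough illustration of how similarity between policy texts can be scored before topic modeling or clustering, here is a minimal TF-IDF plus cosine-similarity sketch in pure Python. The excerpts, tokenizer, and weighting are invented for the example and are far simpler than what a real policy-analysis pipeline would use:

```python
import math
import re
from collections import Counter

# Toy policy excerpts (invented for illustration only).
DOCS = [
    "we collect your email address and may share it with advertising partners",
    "we collect your email and share it with third party advertisers",
    "you may delete your account and request removal of your personal data",
]

def tfidf_vectors(docs):
    """Build a sparse TF-IDF weight dictionary for each document."""
    tokenized = [re.findall(r"[a-z]+", d.lower()) for d in docs]
    df = Counter(w for toks in tokenized for w in set(toks))
    n = len(docs)
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({w: tf[w] * math.log(n / df[w]) for w in tf})
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse weight dictionaries."""
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

vecs = tfidf_vectors(DOCS)
# The two data-sharing excerpts score closer to each other than
# either does to the deletion-rights excerpt.
sharing_sim = cosine(vecs[0], vecs[1])
deletion_sim = cosine(vecs[0], vecs[2])
```

On top of such pairwise scores, a topic model or clustering step would group policy segments by normative concern (collection, sharing, deletion, and so on).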
Find Unique Usages: Helping Developers Understand Common Usages
When working in large and complex codebases, developers face challenges using \textit{Find Usages} to understand how to reuse classes and methods. To better understand these challenges, we conducted a small exploratory study with 4 participants. We found that developers often wasted time reading long lists of similar usages or prematurely focused on a single usage. Based on these findings, we hypothesized that clustering usages by the similarity of their surrounding context might enable developers to more rapidly understand how to use a function. To explore this idea, we designed and implemented \textit{Find Unique Usages}, which extracts usages, computes a diff between pairs of usages, generates similarity scores, and uses these scores to form usage clusters. To evaluate this approach, we conducted a controlled experiment with 12 participants. We found that developers with Find Unique Usages were significantly faster, completing their task in 35% less time.
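The extract-diff-score-cluster pipeline described above can be sketched with Python's standard difflib as the diff-based similarity measure. The snippet texts, the threshold, and the greedy clustering strategy are assumptions for illustration, not the tool's actual implementation:

```python
import difflib

# Hypothetical usage snippets: each string stands for the code surrounding
# one call site (names and snippets invented for illustration).
usages = [
    "conn = open_connection(host)\nsend(conn, data)\nclose(conn)",
    "conn = open_connection(url)\nsend(conn, payload)\nclose(conn)",
    "with open_connection(host) as conn:\n    stream(conn)",
]

def similarity(a, b):
    """Diff-based similarity between two usage contexts, in [0.0, 1.0]."""
    return difflib.SequenceMatcher(None, a, b).ratio()

def cluster_usages(snippets, threshold=0.75):
    """Greedy clustering: attach each usage to the first cluster whose
    representative is similar enough, else start a new cluster."""
    clusters = []
    for snippet in snippets:
        for group in clusters:
            if similarity(group[0], snippet) >= threshold:
                group.append(snippet)
                break
        else:
            clusters.append([snippet])
    return clusters
```

With data like this, the two near-identical call sites fall into one cluster and the structurally different `with`-block usage stands apart, which is the effect the tool aims for: one representative to read per distinct usage pattern.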
Prescribed Optimism: Is it Right to be Wrong About the Future?
We test the assumption that people desire to be accurate when making predictions about their own future. Results revealed that, across four different scenarios and three manipulated variables (commitment to a decision, agency over the decision, and control over outcomes), participants thought it was better to make optimistically biased predictions than accurate or pessimistically biased predictions. Additionally, participants thought that they and others would be optimistic in the scenarios they read, but insufficiently so. We argue that prescriptions can serve as one standard by which the quality of predictions can be judged, and that this particular standard strongly endorses optimism.
Imaging Redshift Estimates for BL Lacertae Objects
We have obtained high dynamic range, good natural seeing i' images of BL Lacertae objects (BL Lacs) to search for the AGN host and thus constrain the source redshift. These objects are drawn from a sample of bright flat-spectrum radio sources that are either known gamma-ray emitters (via recent Fermi LAT observations) or similar sources that might be detected in continuing gamma-ray observations. All had spectroscopic confirmation as BL Lac sources, but no redshift solution. We detected hosts for 25/49 objects. As these galaxies have been argued to be standard candles, our measured host magnitudes provide redshift estimates (ranging from 0.2--1.0). Lower bounds are established on the redshifts of non-detections. The mean of the fit redshifts (and lower limits) is higher than those of spectroscopic solutions in the radio- and gamma-ray-loud parent samples, suggesting corrections may be needed for the luminosity function and evolution of these sources.
Comment: 15 pages, to appear in the Astrophysical Journal
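The standard-candle argument amounts to inverting the distance modulus: assuming an absolute magnitude for the host galaxy, the measured apparent host magnitude fixes a luminosity distance and hence a redshift. A rough numerical sketch follows; the assumed absolute magnitude, cosmological parameters, and the neglect of K-corrections and evolution are all simplifications for illustration, not the paper's calibration:

```python
import math

M_HOST = -22.8     # assumed host absolute magnitude (illustrative value)
H0 = 70.0          # Hubble constant, km/s/Mpc (assumed)
OMEGA_M, OMEGA_L = 0.3, 0.7
C_KMS = 299792.458

def lum_distance_mpc(z, steps=1000):
    """Luminosity distance in flat LCDM via a simple trapezoid integral."""
    if z <= 0:
        return 0.0
    dz = z / steps
    integral = 0.0
    for i in range(steps + 1):
        zi = i * dz
        ez = math.sqrt(OMEGA_M * (1 + zi) ** 3 + OMEGA_L)
        weight = 0.5 if i in (0, steps) else 1.0
        integral += weight / ez
    comoving = (C_KMS / H0) * integral * dz
    return (1 + z) * comoving

def redshift_from_host_mag(m_host, tol=1e-4):
    """Invert m - M = 5 log10(d_L / 10 pc) for z by bisection
    (the distance modulus grows monotonically with redshift)."""
    lo, hi = 1e-4, 5.0
    target = m_host - M_HOST
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        mu = 5 * math.log10(lum_distance_mpc(mid) * 1e6 / 10)
        if mu < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A non-detection works the same way in reverse: the limiting magnitude of the image gives a minimum distance modulus, and hence a lower bound on the redshift.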
WIYN Open Cluster Study XI: WIYN 3.5m Deep Photometry of M35 (NGC 2168)
We present deep BVI observations of the core of M35 and a nearby comparison field obtained at the WIYN 3.5m telescope under excellent seeing. These observations display the lower main sequence in BV and VI CMDs down to V = 23.3 and 24.6, respectively. At these faint magnitudes background field stars are far more numerous than the cluster stars, yet by using a smoothing technique and CMD density distribution subtraction we recover the cluster fiducial main sequence and luminosity function to V = 24.6. We find the location of the main sequence in these CMDs to be consistent with earlier work on other open clusters, specifically NGC 188, NGC 2420, and NGC 2477. We compare these open cluster fiducial sequences to stellar models by Baraffe et al. (1998), Siess et al. (2000), Girardi et al. (2000), and Yi et al. (2001) and find that the models are too blue in both B-V and V-I for stars below ~0.4 Msun. M35 contains stars to the limit of the extracted main sequence, at M ~ 0.10-0.15 Msun, suggesting that M35 may harbor a large number of brown dwarfs, which should be easy targets for near-IR instrumentation on 8-10m telescopes. We also identify a new candidate white dwarf in M35 at V = 21.36 +- 0.01. Depending on which WD models are used to interpret this cluster candidate, it is either a very high mass WD (1.05 +- 0.05 Msun) somewhat older (0.19-0.26 Gyr, 3-4 sigma) than our best isochrone age (150 Myr), or it is a modestly massive WD (0.67-0.78 Msun) much too old (0.42-0.83 Gyr) to belong to the cluster.
Comment: 28 pages + 24 figures; to be published in the Sept, 2002 A
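The CMD density-distribution subtraction can be illustrated schematically: bin both the cluster field and the comparison field into color-magnitude cells, scale the comparison histogram by the relative sky area, and subtract cell by cell so that the statistical excess traces the cluster sequence. The bin sizes, the toy star lists, and the omission of the smoothing step are simplifications of the technique described above:

```python
def cmd_histogram(stars, dmag=0.25, dcol=0.05):
    """Bin stars, given as (V, B-V) pairs, into color-magnitude cells."""
    hist = {}
    for v, bv in stars:
        key = (int(v // dmag), int(bv // dcol))
        hist[key] = hist.get(key, 0) + 1
    return hist

def subtract_field(cluster_hist, field_hist, area_ratio=1.0):
    """Statistical field-star subtraction: remove the expected field counts
    (comparison-field density scaled by the area ratio) from each cell,
    keeping only the positive excess attributed to the cluster."""
    residual = {}
    for key, n in cluster_hist.items():
        excess = n - area_ratio * field_hist.get(key, 0)
        if excess > 0:
            residual[key] = excess
    return residual

# Toy data: three "cluster" stars share a cell with one "field" star,
# while a second cell is pure field and cancels out entirely.
cluster_field = [(20.0, 1.01), (20.05, 1.02), (20.1, 1.03), (23.0, 1.5)]
comparison = [(20.05, 1.02), (23.0, 1.5)]
residual = subtract_field(cmd_histogram(cluster_field), cmd_histogram(comparison))
```

In practice the histograms are smoothed before subtraction so that small-number noise in sparsely populated cells does not masquerade as cluster signal, which is the role of the smoothing technique the abstract mentions.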
DEVELOPMENT OF 1000-TON COMBUSTION DRIVEN COMPACTION PRESS FOR MATERIALS DEVELOPMENT AND PROCESSING
ABSTRACT The following presents the technological development aspects and materials processing efforts of UTRON's 1000 ton Combustion Driven Compaction (CDC) press. The 1000 ton press program evolved from an earlier program in which a 300 ton press was manufactured and tested.
Assessing the accuracy of legal implementation readiness decisions
Abstract: Software engineers regularly build systems that are required to comply with laws and regulations. To this end, software engineers must determine which requirements have met or exceeded their legal obligations and which requirements have not. Requirements that have met or exceeded their legal obligations are legally implementation ready, whereas requirements that have not met or exceeded their legal obligations need further refinement. Research is needed to better understand how to support software engineers in making these determinations. In this paper, we describe a case study in which we asked graduate-level software engineering students to assess whether a set of software requirements for an electronic health record system met or exceeded their corresponding legal obligations as expressed in regulations created pursuant to the U.S. Health Insurance Portability and Accountability Act (HIPAA). We compare the assessment made by graduate students with an assessment made by HIPAA compliance subject matter experts. Additionally, we contrast these results with those generated by a legal requirements triage algorithm. Our findings suggest that the average graduate-level software engineering student is ill-prepared to write legally compliant software with any confidence and that domain experts are an absolute necessity. Our findings also indicate the potential utility of legal requirements metrics in aiding software engineers as they make legal compliance decisions.
Defining the Internet of Devices: Privacy and Security Implications
Presented at the 2014 Privacy Law Scholars Conference, hosted by the George Washington University Law School in Washington, DC, June 2014.
What observers have called the Internet of Things (IoT) presents privacy and security challenges for contemporary society. The conceptual model of the IoT evolved rapidly from technologies used to track parts in industrial supply chain management to a diverse set of smart technologies. This rapid evolution has merged several conceptually distinct technologies into a single, difficult-to-define concept. A key difficulty is defining what constitutes a “thing.” The term has been used to refer both to the things sensed, such as a star or the contents of a refrigerator, and to the things that do the sensing (devices). We argue that the Internet of Things is better conceptualized as an Internet of Devices (IoD) because devices, not things, act in a digital form and connect to the Internet. Alongside the other requirements of an effective IoD, technologists and policy makers must develop standards, network protocols, identity management solutions, and governance for the IoD to address privacy and security challenges a priori rather than retrofitting them after the fact. Privacy and security cannot easily be added to a system that is already deployed and established. In this paper, we define the IoT and the IoD and summarize the independent technologies from which they have evolved. We provide a five-stage general policy framework for evaluating privacy and security concerns in the IoD. Our framework seeks to provide a consistent approach to evaluating privacy and security concerns across all IoD technologies while remaining flexible enough to adapt to new technical developments.